# DPT architecture

| Model | Author | License | Tags | Downloads | Likes | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Depth Anything V2 Metric Outdoor Large Hf | depth-anything | Apache-2.0 | 3D Vision, Transformers | 3,662 | 6 | A fine-tuned version of Depth Anything V2 for outdoor metric depth estimation, trained on the synthetic Virtual KITTI dataset. |
| Coreml Depth Anything V2 Small | apple | Apache-2.0 | 3D Vision | 67 | 58 | Depth Anything V2 is a depth estimation model based on the DPT architecture with a DINOv2 backbone; training on large-scale synthetic and real-world data gives it fine-grained, robust depth predictions. |
| Coreml Depth Anything Small | apple | Apache-2.0 | 3D Vision | 51 | 36 | Depth Anything is a depth estimation model based on the DPT architecture and a DINOv2 backbone, trained on approximately 62 million images; it achieves state-of-the-art results in relative and absolute depth estimation. |
| Zoedepth Nyu | Intel | MIT | 3D Vision, Transformers | 1,279 | 1 | ZoeDepth is a monocular depth estimation model fine-tuned on the NYU dataset, capable of zero-shot transfer and metric depth estimation. |
| Depth Anything Large Hf | LiheYoung | Apache-2.0 | 3D Vision, Transformers | 147.17k | 51 | The large Depth Anything variant (DPT architecture, DINOv2 backbone), trained on approximately 62 million images; state-of-the-art results in both relative and absolute depth estimation. |
| Depth Anything Base Hf | LiheYoung | Apache-2.0 | 3D Vision, Transformers | 4,101 | 10 | The base Depth Anything variant, same architecture and training data; state-of-the-art performance in zero-shot depth estimation. |
| Depth Anything Small Hf | LiheYoung | Apache-2.0 | 3D Vision, Transformers | 97.89k | 29 | The small Depth Anything variant, trained on approximately 62 million images and strong in both relative and absolute depth estimation tasks. |
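For metric depth in physical units (the Virtual KITTI fine-tune and ZoeDepth entries), the same checkpoints can also be driven through the lower-level `AutoModelForDepthEstimation` API, which exposes the raw prediction for resizing. A sketch under the assumption that the outdoor metric model lives at `depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf` (repository id inferred from the entry name and author; verify it on the Hub before use):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

# Repository id inferred from the "Depth Anything V2 Metric Outdoor Large Hf" entry.
checkpoint = "depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForDepthEstimation.from_pretrained(checkpoint)

image = Image.open("street_scene.jpg")  # placeholder path; any outdoor RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Upsample the raw prediction back to the input resolution. The metric
# checkpoints predict depth in metres; the relative checkpoints return
# depth only up to an unknown scale and shift.
depth = torch.nn.functional.interpolate(
    outputs.predicted_depth.unsqueeze(1),
    size=image.size[::-1],   # PIL size is (width, height); interpolate wants (height, width)
    mode="bicubic",
    align_corners=False,
).squeeze()
```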